In mathematics, the T(1) theorem, first proved by David and Journé, describes when an operator ''T'' given by a kernel can be extended to a bounded linear operator on the Hilbert space ''L''<sup>2</sup>(R<sup>''n''</sup>). The name ''T''(1) theorem refers to a condition on the distribution ''T''(1), given by the operator ''T'' applied to the function 1.

==Statement==
Suppose that ''T'' is a continuous operator from Schwartz functions on R<sup>''n''</sup> to tempered distributions, so that ''T'' is given by a kernel ''K'' which is a distribution. Assume that the kernel is standard, meaning that off the diagonal it is given by a function satisfying certain size and regularity conditions (one common formulation is displayed after the statement). The ''T''(1) theorem then states that ''T'' can be extended to a bounded operator on the Hilbert space ''L''<sup>2</sup>(R<sup>''n''</sup>) if and only if the following conditions are satisfied:
*''T''(1) is of bounded mean oscillation (where ''T'' is extended to an operator on bounded smooth functions, such as 1).
*''T''*(1) is of bounded mean oscillation, where ''T''* is the adjoint of ''T''.
*''T'' is weakly bounded, a weak condition that is easy to verify in practice.
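For concreteness, the following is one common formulation of these conditions; the exponent <math>\delta \in (0,1]</math>, the constants, and the bump-function normalization are conventions of this particular formulation rather than details fixed by the statement above, and other equivalent versions appear in the literature. A kernel ''K'' is standard if, for <math>x \ne y</math>,
:<math>|K(x,y)| \le \frac{C}{|x-y|^{n}},</math>
:<math>|K(x,y)-K(x',y)| \le \frac{C\,|x-x'|^{\delta}}{|x-y|^{n+\delta}} \quad\text{whenever } |x-x'| \le \tfrac12 |x-y|,</math>
:<math>|K(x,y)-K(x,y')| \le \frac{C\,|y-y'|^{\delta}}{|x-y|^{n+\delta}} \quad\text{whenever } |y-y'| \le \tfrac12 |x-y|.</math>
In the same spirit, one common way to state the weak boundedness of ''T'' is: for all smooth bumps <math>\varphi, \psi</math> supported in the unit ball with derivatives bounded by 1, all <math>x_0 \in \mathbf{R}^n</math> and all <math>R>0</math>,
:<math>\bigl|\langle T\varphi^{x_0,R}, \psi^{x_0,R}\rangle\bigr| \le C\,R^{n}, \qquad \text{where } \varphi^{x_0,R}(y) := \varphi\!\left(\frac{y-x_0}{R}\right).</math>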